Hong Kong
The tiny tuxedo cat who became a naval hero
A 17-year-old British sailor saved Simon from the Hong Kong docks when the cat was likely a year old. One day in March 1948, George Hickinbottom, a British sailor, was walking around the docks of Stonecutters Island in Hong Kong. When the 17-year-old spotted a small black-and-white tuxedo cat, barely out of kittenhood, he decided to smuggle the hungry, scrawny animal aboard his ship, the HMS . Hickinbottom didn't get in trouble.
- Asia > China > Hong Kong (0.46)
- North America > United States > Oregon (0.05)
- North America > United States > Idaho (0.05)
- (7 more...)
- Asia > China > Hong Kong (0.06)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Canada > Alberta > Census Division No. 15 > Improvement District No. 9 > Banff (0.04)
Calling AI 'a gift from God,' Catholic bishops draft usage guidelines for Asia
Cardinal Stephen Chow, the bishop of Hong Kong, speaks during Mass in Hong Kong in November 2023. During the opening Mass of a three-day event to draft guidelines for the clergy's use of artificial intelligence in Asia, he described AI as a gift from God. | REUTERS
Catholic bishops and priests from across Asia are set to conclude a three-day event in Hong Kong on Friday, during which they drafted guidelines for the clergy's use of artificial intelligence in the region. The Federation of Asian Bishops -- a 55-year-old institution that includes representatives from across the region, including Indonesia, Taiwan, Sri Lanka and Japan -- discussed AI and its impact on humanity and the church, and how it can serve as a tool for scripture searches. They also discussed principles for the use of AI in evangelization. The theme of the meetings was a call to embrace AI responsibly.
- Government (0.33)
- Media > News (0.31)
- Leisure & Entertainment (0.31)
- Information Technology > Communications > Social Media (0.80)
- Information Technology > Artificial Intelligence > Applied AI (0.77)
Directed evolution algorithm drives neural prediction
Wang, Yanlin, Young, Nancy M, Wong, Patrick C M
Neural prediction offers a promising approach to forecasting individual variability in neurocognitive functions and disorders and to providing prognostic indicators for personalized intervention. However, it is challenging to translate neural predictive models into medical artificial intelligence applications due to the limitations of domain shift and label scarcity. Here, we propose the directed evolution model (DEM), a novel computational model that mimics the trial-and-error processes of biological directed evolution to approximate optimal solutions for predictive modeling tasks. We demonstrate that the directed evolution algorithm is an effective strategy for uncertainty exploration, enhancing generalization in reinforcement learning. Furthermore, by incorporating a replay buffer and continual backpropagation into DEM, we provide evidence of achieving a better trade-off between exploitation and exploration in continual learning settings. We conducted experiments on four different datasets of children with cochlear implants whose spoken-language developmental outcomes vary considerably at the individual-child level. Preoperative neural MRI data have been shown to accurately predict the post-operative outcomes of these children within but not across datasets. Our results show that DEM can efficiently improve the performance of cross-domain pre-implantation neural prediction while addressing the challenge of label scarcity in the target domain.
- North America > United States > Illinois > Cook County > Chicago (0.08)
- Asia > China > Hong Kong (0.06)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Energy (0.93)
- Health & Medicine > Therapeutic Area > Neurology (0.93)
- Health & Medicine > Health Care Technology (0.93)
- (2 more...)
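The abstract describes directed evolution as a trial-and-error loop: mutate candidate solutions, keep the fittest, repeat. A minimal generic sketch of that loop, not the paper's DEM (all function names and parameters here are illustrative assumptions):

```python
import random

def directed_evolution(fitness, init, mutate, population=20, generations=50, top_k=5, seed=0):
    """Generic directed-evolution loop: mutate candidates, select the fittest,
    and repeat -- the trial-and-error search the DEM abstract describes."""
    rng = random.Random(seed)
    pool = [init() for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        parents = pool[:top_k]                                  # selection: keep the fittest
        pool = parents + [mutate(rng.choice(parents), rng)
                          for _ in range(population - top_k)]   # variation: mutate survivors
    return max(pool, key=fitness)

# Toy task: evolve a weight vector toward a known target.
target = [0.3, -1.2, 0.8]
best = directed_evolution(
    fitness=lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)),
    init=lambda: [0.0, 0.0, 0.0],
    mutate=lambda w, rng: [x + rng.gauss(0, 0.1) for x in w],
)
```

The elitist selection (keeping `top_k` parents unchanged) guarantees the best candidate never regresses between generations, which is one simple way such a search stays stable.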
Emotion-Enhanced Multi-Task Learning with LLMs for Aspect Category Sentiment Analysis
Chai, Yaping, Xie, Haoran, Qin, Joe S.
Aspect category sentiment analysis (ACSA) has achieved remarkable progress with large language models (LLMs), yet existing approaches primarily emphasize sentiment polarity while overlooking the underlying emotional dimensions that shape sentiment expressions. This limitation hinders the model's ability to capture fine-grained affective signals toward specific aspect categories. To address this limitation, we introduce a novel emotion-enhanced multi-task ACSA framework that jointly learns sentiment polarity and category-specific emotions grounded in Ekman's six basic emotions. Leveraging the generative capabilities of LLMs, our approach enables the model to produce emotional descriptions for each aspect category, thereby enriching sentiment representations with affective expressions. Furthermore, to ensure the accuracy and consistency of the generated emotions, we introduce an emotion refinement mechanism based on the Valence-Arousal-Dominance (VAD) dimensional framework. Specifically, emotions predicted by the LLM are projected onto a VAD space, and those inconsistent with their corresponding VAD coordinates are re-annotated using a structured LLM-based refinement strategy. Experimental results demonstrate that our approach significantly outperforms strong baselines on all benchmark datasets. This underlines the effectiveness of integrating affective dimensions into ACSA.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- Asia > China > Hong Kong (0.06)
- Europe > Ukraine (0.04)
- (10 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
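The refinement mechanism above projects predicted emotions onto a Valence-Arousal-Dominance space and re-annotates those that land too far from their coordinates. A small sketch of that idea; the VAD coordinates and the distance threshold below are illustrative assumptions, not the paper's actual mapping:

```python
import math

# Rough VAD (valence, arousal, dominance) coordinates for Ekman's six basic
# emotions -- illustrative values only.
VAD = {
    "joy":      (0.8, 0.6, 0.6),
    "sadness":  (-0.6, -0.4, -0.4),
    "anger":    (-0.5, 0.8, 0.4),
    "fear":     (-0.6, 0.7, -0.5),
    "disgust":  (-0.6, 0.3, 0.2),
    "surprise": (0.3, 0.8, 0.0),
}

def refine(predicted_emotion, vad_estimate, max_dist=0.6):
    """If the predicted emotion sits too far from the text's estimated VAD
    point, re-annotate with the nearest emotion in VAD space."""
    if math.dist(VAD[predicted_emotion], vad_estimate) <= max_dist:
        return predicted_emotion                                # consistent: keep it
    return min(VAD, key=lambda e: math.dist(VAD[e], vad_estimate))  # re-annotate

# A "joy" prediction paired with a negative, high-arousal, low-dominance
# estimate is inconsistent and gets re-annotated.
fixed = refine("joy", (-0.55, 0.75, -0.45))
```

In the paper the re-annotation is done by a structured LLM prompt rather than a nearest-neighbor rule; the sketch only shows the consistency check that triggers it.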
LLM-Driven Stationarity-Aware Expert Demonstrations for Multi-Agent Reinforcement Learning in Mobile Systems
Duan, Tianyang, Zhang, Zongyuan, Lin, Zheng, Guo, Songxiao, Guan, Xiuxian, Wu, Guangyu, Fang, Zihan, Meng, Haotian, Du, Xia, Zhou, Ji-Zhe, Cui, Heming, Luo, Jun, Gao, Yue
Multi-agent reinforcement learning (MARL) has been increasingly adopted in many real-world applications. While MARL enables decentralized deployment on resource-constrained edge devices, it suffers from severe non-stationarity due to the synchronous updates of agent policies. This non-stationarity results in unstable training and poor policy convergence, especially as the number of agents increases. In this paper, we propose RELED, a scalable MARL framework that integrates large language model (LLM)-driven expert demonstrations with autonomous agent exploration. RELED incorporates a Stationarity-Aware Expert Demonstration module, which leverages theoretical non-stationarity bounds to enhance the quality of LLM-generated expert trajectories, thus providing high-reward, training-stable samples for each agent. Moreover, a Hybrid Expert-Agent Policy Optimization module adaptively balances each agent's learning from both expert-generated and agent-generated trajectories, accelerating policy convergence and improving generalization. Extensive experiments with real city networks based on OpenStreetMap demonstrate that RELED achieves superior performance compared to state-of-the-art MARL methods.
- Overview (0.93)
- Research Report (0.64)
- Transportation > Ground > Road (0.93)
- Transportation > Infrastructure & Services (0.67)
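The hybrid module described above adaptively balances learning from expert and agent trajectories. One common way to realize such a balance is a sampling ratio that anneals as the agent's own returns approach the expert's; the sketch below shows that idea only, with hypothetical names, and is not RELED's actual optimization rule:

```python
import random

def sample_batch(expert_traj, agent_traj, agent_score, expert_score, batch=8, seed=0):
    """Hybrid expert-agent sampling sketch: draw mostly expert transitions
    while the agent underperforms the demonstrations, and anneal toward
    agent-generated data as its returns catch up."""
    rng = random.Random(seed)
    # Expert fraction shrinks linearly as agent_score approaches expert_score.
    frac = max(0.0, min(1.0, (expert_score - agent_score) / max(expert_score, 1e-8)))
    n_expert = round(batch * frac)
    return ([rng.choice(expert_traj) for _ in range(n_expert)] +
            [rng.choice(agent_traj) for _ in range(batch - n_expert)])

# Early in training the batch is expert-heavy; late in training it is not.
early = sample_batch(["e1", "e2"], ["a1", "a2"], agent_score=2.0, expert_score=10.0)
late = sample_batch(["e1", "e2"], ["a1", "a2"], agent_score=9.5, expert_score=10.0)
```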
Pharos-ESG: A Framework for Multimodal Parsing, Contextual Narration, and Hierarchical Labeling of ESG Report
Chen, Yan, Zou, Yu, Zeng, Jialei, You, Haoran, Zhou, Xiaorui, Zhong, Aixi
Environmental, Social, and Governance (ESG) principles are reshaping the foundations of global financial governance, transforming capital allocation architectures, regulatory frameworks, and systemic risk coordination mechanisms. However, as the core medium for assessing corporate ESG performance, ESG reports present significant challenges for large-scale understanding, due to chaotic reading order from slide-like irregular layouts and implicit hierarchies arising from lengthy, weakly structured content. To address these challenges, we propose Pharos-ESG, a unified framework that transforms ESG reports into structured representations through multimodal parsing, contextual narration, and hierarchical labeling. It integrates a reading-order modeling module based on layout flow, hierarchy-aware segmentation guided by table-of-contents anchors, and a multimodal aggregation pipeline that contextually transforms visual elements into coherent natural language. The framework further enriches its outputs with ESG, GRI, and sentiment labels, yielding annotations aligned with the analytical demands of financial research. Extensive experiments on annotated benchmarks demonstrate that Pharos-ESG consistently outperforms both dedicated document parsing systems and general-purpose multimodal models. In addition, we release Aurora-ESG, the first large-scale public dataset of ESG reports, spanning Mainland China, Hong Kong, and U.S. markets, featuring unified structured representations of multimodal content, enriched with fine-grained layout and semantic annotations to better support ESG integration in financial governance and decision-making.
- Asia > China > Hong Kong (0.25)
- North America > Mexico > Gulf of Mexico (0.04)
- Asia > Taiwan (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Grammars & Parsing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.95)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
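The abstract's "hierarchy-aware segmentation guided by table-of-contents anchors" can be pictured as splitting a reading-ordered block stream wherever a block matches the next TOC title. A minimal sketch of that idea under exact-match assumptions (real reports would need fuzzy matching); the function and section names are illustrative:

```python
def segment_by_toc(blocks, toc):
    """Split a flat, reading-ordered list of text blocks into sections,
    using table-of-contents entries as anchors."""
    sections, current_title, current = {}, "front_matter", []
    pending = list(toc)  # TOC titles still to be matched, in order
    for block in blocks:
        if pending and block.strip().lower() == pending[0].strip().lower():
            sections[current_title] = current        # close the open section
            current_title, current = pending.pop(0), []
        else:
            current.append(block)
    sections[current_title] = current                # close the last section
    return sections

doc = ["Cover page", "Environmental", "CO2 disclosures", "Governance", "Board structure"]
secs = segment_by_toc(doc, ["Environmental", "Governance"])
```

Matching TOC entries in order (rather than anywhere) is what keeps the recovered hierarchy consistent with the document's reading order.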
Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions
Hong, Mengze, Jiang, Di, Zhao, Weiwei, Li, Yawen, Wang, Yihang, Luo, Xinyuan, Sun, Yanjie, Zhang, Chen Jason
While large language models (LLMs) offer promising capabilities for automating academic workflows, existing systems for academic peer review remain constrained by text-only inputs, limited contextual grounding, and a lack of actionable feedback. In this work, we present an interactive web-based system for multimodal, community-aware peer review simulation to enable effective manuscript revisions before paper submission. Our framework integrates textual and visual information through multimodal LLMs, enhances review quality via retrieval-augmented generation (RAG) grounded in web-scale OpenReview data, and converts generated reviews into actionable to-do lists using the proposed Action:Objective[#] format, providing structured and traceable guidance. The system integrates seamlessly into existing academic writing platforms, providing interactive interfaces for real-time feedback and revision tracking. Experimental results highlight the effectiveness of the proposed system in generating more comprehensive and useful reviews aligned with expert standards, surpassing ablated baselines and advancing transparent, human-centered scholarly assistance.
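The abstract names an Action:Objective[#] format for to-do items but does not spell it out. One plausible reading, sketched here purely as an assumption, is an action verb, an objective phrase, and a numeric tag pointing back to the review comment; the regex and field names below are hypothetical, not the system's actual grammar:

```python
import re

# Hypothetical reading of the Action:Objective[#] convention: verb, then
# objective text, then a bracketed number referencing the source comment.
TODO_RE = re.compile(r"^(?P<action>\w+):(?P<objective>[^\[]+)\[(?P<ref>\d+)\]$")

def parse_todos(lines):
    """Turn generated review recommendations into structured to-do items."""
    todos = []
    for line in lines:
        m = TODO_RE.match(line.strip())
        if m:
            todos.append({"action": m["action"],
                          "objective": m["objective"].strip(),
                          "ref": int(m["ref"])})
    return todos

todos = parse_todos(["Revise:Clarify the ablation setup[2]",
                     "Add:Report variance across seeds[5]"])
```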
Audio Driven Real-Time Facial Animation for Social Telepresence
Lee, Jiye, Li, Chenghui, Tran, Linh, Wei, Shih-En, Saragih, Jason, Richard, Alexander, Joo, Hanbyul, Bai, Shaojie
We present an audio-driven real-time system for animating photorealistic 3D facial avatars with minimal latency, designed for social interactions in virtual reality for anyone. Central to our approach is an encoder model that transforms audio signals into latent facial expression sequences in real time, which are then decoded as photorealistic 3D facial avatars. Leveraging the generative capabilities of diffusion models, we capture the rich spectrum of facial expressions necessary for natural communication while achieving real-time performance (<15ms GPU time). Our novel architecture minimizes latency through two key innovations: an online transformer that eliminates dependency on future inputs and a distillation pipeline that accelerates iterative denoising into a single step. We further address critical design challenges in live scenarios for processing continuous audio signals frame-by-frame while maintaining consistent animation quality. The versatility of our framework extends to multimodal applications, including semantic modalities such as emotion conditions and multimodal sensors with head-mounted eye cameras on VR headsets. Experimental results demonstrate significant improvements in facial animation accuracy over existing offline state-of-the-art baselines, achieving 100 to 1000 times faster inference speed. We validate our approach through live VR demonstrations and across various scenarios such as multilingual speeches.
- Asia > South Korea > Seoul > Seoul (0.40)
- Asia > China > Hong Kong (0.07)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- (3 more...)
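The "online transformer that eliminates dependency on future inputs" corresponds to causal attention: frame t may attend only to frames up to t, so the current frame's animation never waits on future audio. A minimal NumPy sketch of that masking property (single head, no learned projections, illustrative only):

```python
import numpy as np

def causal_attention(q, k, v):
    """Single-head attention with a causal mask: position t attends only to
    positions <= t, the property a low-latency online transformer needs."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Mask out future positions before the softmax.
    scores = np.where(np.tril(np.ones((t, t), dtype=bool)), scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))      # 4 audio frames, 8-dim features
out = causal_attention(x, x, x)
```

Because of the mask, editing a later frame leaves all earlier outputs untouched, which is exactly what allows frame-by-frame streaming.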
MMRHP: A Miniature Mixed-Reality HIL Platform for Auditable Closed-Loop Evaluation
Li, Mingxin, Hu, Haibo, Deng, Jinghuai, Xi, Yuchen, Chen, Xinhong, Wang, Jianping
Validation of autonomous driving systems requires a trade-off between test fidelity, cost, and scalability. While miniaturized hardware-in-the-loop (HIL) platforms have emerged as a promising solution, a systematic framework supporting rigorous quantitative analysis is generally lacking, limiting their value as scientific evaluation tools. To address this challenge, we propose MMRHP, a miniature mixed-reality HIL platform that elevates miniaturized testing from functional demonstration to rigorous, reproducible quantitative analysis. The core contributions are threefold. First, we propose a systematic three-phase testing process oriented toward the Safety of the Intended Functionality (SOTIF) standard, providing actionable guidance for identifying the performance limits and triggering conditions of otherwise correctly functioning systems. Second, we design and implement a HIL platform centered around a unified spatiotemporal measurement core to support this process, ensuring consistent and traceable quantification of physical motion and system timing. Finally, we demonstrate the effectiveness of this solution through comprehensive experiments. The platform itself was first validated, achieving a spatial accuracy of 10.27 mm RMSE and a stable closed-loop latency baseline of approximately 45 ms. Subsequently, an in-depth Autoware case study leveraged this validated platform to quantify its performance baseline and identify a critical performance cliff at an injected latency of 40 ms. This work shows that a structured process, combined with a platform offering a unified spatiotemporal benchmark, enables reproducible, interpretable, and quantitative closed-loop evaluation of autonomous driving systems. Index Terms--Autonomous Driving, Hardware-in-the-Loop (HIL), Mixed Reality, CARLA, SOTIF, Validation and Verification (V&V).
The commercial deployment of autonomous vehicles (AVs) faces a critical bottleneck that has shifted from achieving basic functionality to delivering statistically convincing safety in long-tail scenarios [1].
- Asia > China > Hong Kong (0.05)
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Texas (0.04)
- (5 more...)
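The platform's spatial-accuracy figure (10.27 mm RMSE) is a root-mean-square error over tracked versus ground-truth positions. A short sketch of that standard computation for 2D positions; the function name and toy coordinates are illustrative, not the paper's data:

```python
import math

def rmse_mm(measured, ground_truth):
    """Root-mean-square error between tracked and ground-truth 2D positions
    (both in mm) -- the metric behind a spatial-accuracy figure like 10.27 mm."""
    assert len(measured) == len(ground_truth)
    sq = [(mx - gx) ** 2 + (my - gy) ** 2
          for (mx, my), (gx, gy) in zip(measured, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

truth = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0)]
tracked = [(3.0, 4.0), (100.0, -5.0), (94.0, 100.0)]
err = rmse_mm(tracked, truth)
```

Because RMSE squares each deviation before averaging, occasional large tracking errors dominate the figure, which makes it a conservative accuracy summary for a measurement core.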